Dynamic-static unsupervised sequentiality, statistical subunits and lexicon for sign language recognition

Authors

  • Stavros Theodorakis
  • Vassilis Pitsikalis
  • Petros Maragos
Abstract

We introduce a new computational phonetic modeling framework for sign language (SL) recognition. It is based on dynamic–static statistical subunits and provides sequentiality in an unsupervised manner, without prior linguistic information. Subunit "sequentiality" refers to the decomposition of signs into two types of parts, varying and non-varying, that are sequentially stacked across time. Our approach is inspired by the Movement–Hold SL linguistic model, which refers to such sequences. First, we segment signs into intra-sign primitives and classify each segment as dynamic or static, i.e., movements and non-movements. These segments are then clustered appropriately to construct a set of dynamic and static subunits. The dynamic/static discrimination allows us to employ different visual features for clustering the dynamic and the static segments. Sequences of the generated subunits are used as sign pronunciations in a data-driven lexicon. Based on this lexicon and the corresponding segmentation, each subunit is statistically represented and trained on multimodal sign data as a hidden Markov model. In the proposed approach, dynamic/static sequentiality is thus incorporated in an unsupervised manner. Further, handshape information is integrated in a parallel hidden Markov modeling scheme. The novel sign language modeling scheme is evaluated in recognition experiments on data from three corpora and two sign languages: Boston University American SL, which is employed pre-segmented at the sign level; Greek SL Lemmas; and the American SL Large Vocabulary Dictionary, including both signer-dependent and unseen-signer testing. Results show consistent improvements when compared with other approaches, demonstrating the importance of dynamic/static structure in sub-sign phonetic modeling.

Sign languages are natural languages that manifest themselves via the visual modality in 3D space. They convey information via visual patterns and serve for communication in parts of Deaf communities [2]. Visual patterns are formed by manual and non-manual cues. The automatic processing of such visual patterns for Automatic Sign Language Recognition (ASLR) can bridge the communication gap between the deaf and the hearing. Since the early work of [3], there has been progress in visual processing, sign language phonetic modeling, and automatic recognition [1,4,5]. Moreover, ASLR may contribute to other disciplines such as linguistics for the study of Sign Languages (SLs), via automated processing of corpora, and it is broadly related to human–computer interaction. Herein we focus on sign language articulation produced by manual cues. The term "manual cues" refers to the movements and …
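The first stage of the pipeline, splitting a sign into dynamic (movement) and static (hold) parts, can be illustrated by thresholding frame-to-frame hand speed. This is a minimal sketch, not the authors' actual feature extraction: the toy trajectory, the threshold value, and the `segment_dynamic_static` helper are illustrative assumptions.

```python
import numpy as np

def segment_dynamic_static(trajectory, speed_threshold=0.5):
    """Split a 2-D hand trajectory into alternating dynamic/static
    segments by thresholding frame-to-frame speed.

    trajectory: (T, 2) array of hand positions per frame.
    Returns a list of (label, start, end) tuples, where label is
    'dynamic' (movement) or 'static' (hold) and [start, end) is a
    range over the T-1 inter-frame transitions.
    """
    speed = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
    labels = ["dynamic" if s > speed_threshold else "static" for s in speed]
    segments, start = [], 0
    for i in range(1, len(labels)):
        if labels[i] != labels[start]:
            segments.append((labels[start], start, i))
            start = i
    segments.append((labels[start], start, len(labels)))
    return segments

# A toy sign: a hold, a diagonal movement, then another hold.
traj = np.array([[0.0, 0.0]] * 5
                + [[float(i), float(i)] for i in range(1, 6)]
                + [[5.0, 5.0]] * 5)
print(segment_dynamic_static(traj))
# [('static', 0, 4), ('dynamic', 4, 9), ('static', 9, 14)]
```

In the paper's framework the resulting dynamic and static segments would then be clustered with different visual features to form the subunit inventory.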


Similar Articles

A Real-Time Continuous Gesture Recognition System for Sign Language

In this paper, a large-vocabulary sign language interpreter is presented with real-time continuous gesture recognition of sign language using a DataGlove. The most critical problem, end-point detection in a stream of gesture input, is solved first, and then statistical analysis is performed according to four parameters of a gesture: posture, position, orientation, and motion. We have implemented a proto...


Sign Language Recognition: State of the Art

Sign language is used by deaf and hard-of-hearing people to exchange information within their own community and with other people. Computer recognition of sign language spans from sign gesture acquisition to text/speech generation. Sign gestures can be classified as static and dynamic. Static gesture recognition is simpler than dynamic gesture recognition, but both recognit...


Applying mean shift and motion detection approaches to hand tracking in sign language

Hand gesture recognition is very important for communicating in sign language. In this paper, an effective object tracking and hand gesture recognition method is proposed. This method is a combination of two well-known approaches: the mean shift and the motion detection algorithms. The mean shift algorithm can track objects based on their color; then, when the hand passes the face, occlusion happens. Several...
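The mean-shift step described in this teaser can be sketched as a window that repeatedly shifts toward the centroid of a skin-colour probability map until it converges. This is a self-contained illustrative sketch, not that paper's implementation (in practice one would use OpenCV's `cv2.meanShift`); the `mean_shift_window` helper and the toy probability map are assumptions.

```python
import numpy as np

def mean_shift_window(prob, window, max_iter=20):
    """One mean-shift tracking step: move a rectangular window toward
    the centroid of a probability map (e.g. a skin-colour
    back-projection). prob: (H, W) non-negative weights;
    window: (x, y, w, h). Returns the converged window."""
    x, y, w, h = window
    for _ in range(max_iter):
        patch = prob[y:y + h, x:x + w]
        total = patch.sum()
        if total == 0:          # window sees no probability mass
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (xs * patch).sum() / total   # centroid inside the window
        cy = (ys * patch).sum() / total
        dx = int(round(cx - (w - 1) / 2))  # shift toward the centroid
        dy = int(round(cy - (h - 1) / 2))
        if dx == 0 and dy == 0:            # converged
            break
        x = min(max(x + dx, 0), prob.shape[1] - w)
        y = min(max(y + dy, 0), prob.shape[0] - h)
    return (x, y, w, h)

# Toy map: a 3x3 skin-probability blob; the window drifts onto it.
prob = np.zeros((20, 20))
prob[12:15, 12:15] = 1.0
print(mean_shift_window(prob, (8, 8, 8, 8)))
```

The teaser's point about occlusion is why a colour-only tracker fails when the hand crosses the face: both regions score high in the probability map, which is what motivates combining mean shift with motion detection.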


Segmentation of One and Two Hand Gesture Recognition using Key Frame Selection

Sign language recognition is a popular research area involving computer vision, pattern recognition, and image processing. It enhances the communication capabilities of mute persons. In this paper, we present object-based key frame selection and skin-colour segmentation under uniform and non-uniform backgrounds for one- and two-hand gesture recognition. Experimental results demonstrate the e...


Key Frame Detection Algorithm based on Dynamic Sign Language Video for the Non Specific Population

Current sign language recognition algorithms either can only identify static gestures or need data gloves, position sensors, and other auxiliary equipment, which are only used for laboratory research and some special occasions. Therefore, they are not conducive to widespread adoption. A new idea of sign language recognition based on key frames is presented in this paper. ...
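A common baseline for the key-frame idea in these last two teasers is to keep a frame whenever it differs sufficiently from the previously kept one. The sketch below is a minimal, hedged illustration under that assumption; the `select_key_frames` helper, the mean-absolute-difference criterion, and the threshold are not from either paper.

```python
import numpy as np

def select_key_frames(frames, diff_threshold=10.0):
    """Greedy key-frame selection: keep a frame whenever its mean
    absolute pixel difference from the last kept frame reaches the
    threshold. frames: (T, H, W) array; returns key-frame indices."""
    keys = [0]  # the first frame is always a key frame
    for t in range(1, len(frames)):
        diff = np.abs(frames[t].astype(float)
                      - frames[keys[-1]].astype(float)).mean()
        if diff >= diff_threshold:
            keys.append(t)
    return keys

# Toy video: two near-identical frames, then an abrupt change,
# then a frame close to the changed one.
frames = np.stack([np.full((8, 8), v) for v in (0, 0, 50, 55)])
print(select_key_frames(frames))
# [0, 2]
```

Downstream, only the selected frames would be passed to gesture classification, which is what makes the approach attractive for non-specific populations without gloves or sensors.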




Journal:
  • Image Vision Comput.

Volume 32, Issue 

Pages  -

Publication date: 2014